Dr. Hans Mark, Chancellor
University of Texas System
601 Colorado
Austin, TX 787

Dear Hans:

Thanks for your article on SDI. I agree with it, except
possibly for the part that concerns my field --- artificial intelligence.
My impression, from Lowell Wood and others, is that present plans
for the computer control of SDI do not involve AI. I also
don't know anyone in AI working on SDI, although there may well
be some. Moreover, it isn't apparent to me that AI is essential
in order that SDI work successfully, although I'm not certain about
that either.

My concern is that opponents of SDI will seize on such remarks
to attack its feasibility. They may be able to verify that nothing
significant is being done to apply AI to SDI, collect a bunch of
signatures to that effect, and take your statement and similar ones
as ``admissions'' from the SDI side that SDI can't work without AI.

My opinion, not based on detailed study, is that the computer
part of SDI is feasible without extensive AI, although AI technology
may contribute improvements. I have examined the arguments against
the feasibility of the computer part of SDI, and I regard them as
insubstantial. They list some genuine difficulties in achieving
reliability and say that these difficulties can't be overcome.
Proponents of SDI can
say, ``Yes, those are difficulties, but we expect to overcome them''.
Until there is an overall system design proposal, that's about all
either side can say. The arguments haven't involved AI, except that
some SDI advocate, I forget who, appealed to AI as the savior in
response to Parnas's arguments against the computer part of SDI.
This was unwise; Parnas's arguments against a conventional system
are insubstantial, as I once said in a letter to some Congressmen
solicited by Bob Jastrow.

As I mentioned to you at our lunch, there is one relevant study
I think should be done. Many previous defense and space programs have
involved computing. We can investigate the extent to which difficulties
with the software contributed to cost overruns, program delays and
disasters. My impression is that this extent will turn out to be small.
Moreover, software is usually one of the smaller costs, and this would
almost certainly be the case with SDI. Therefore, there is a ``safety
margin'' that could be used to put more money and effort into software
if it turned out to be critical and difficult in the SDI case.

I recently read Feynman's appendix to the Rogers Commission
report on the Challenger disaster. Feynman studied the engineering
validation of the solid booster, the main engines and the avionics and
computing. His conclusion was that only in the avionics and computing
could one say that validation standards had not been compromised in
order to meet schedules.

Many thanks for the lunch and the discussions.

Sincerely,